- Short: AGF V0.9 - n*8-bit Sample Pre-Packing Processor
- Author: olethros@geocities.com (Christos Dimitrakakis)
- Uploader: olethros@geocities.com (Christos Dimitrakakis)
- Type: util/pack
- Requires: 68020+ (fpu opt.)
-
- OVERVIEW
-
- AGF is a sample pre-processor. It transforms the data into a form with far
- less information content, which makes it much easier for compression
- programs to pack it down to a small size. AGF combined with GZIP gives an
- average compression of 50%, and the combination is always better than any
- other compression method used on its own. It is similar to ADPCM, but
- better :)
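-
- To see why this helps, consider the simplest possible pre-processor, a
- plain delta coder (this is only an illustration of the principle, not the
- AGF filter itself): the residuals of smooth audio cluster around zero and
- are therefore much easier for a general-purpose packer to compress, and the
- step is exactly reversible.
-
-   /* Illustration only - replace each 8-bit sample with its difference
-    * from the previous one (mod 256), and the exact inverse. */
-   void delta_encode(unsigned char *buf, long n)
-   {
-       unsigned char prev = 0;
-       for (long i = 0; i < n; i++) {
-           unsigned char cur = buf[i];
-           buf[i] = (unsigned char)(cur - prev);   /* store the residual */
-           prev = cur;
-       }
-   }
-
-   void delta_decode(unsigned char *buf, long n)
-   {
-       unsigned char prev = 0;
-       for (long i = 0; i < n; i++) {
-           prev = (unsigned char)(prev + buf[i]);  /* rebuild the sample */
-           buf[i] = prev;
-       }
-   }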
-
- HISTORY
-
- 06-09-1999 : Released a version that works :)
- 05-09-1999 : Released a version that works properly (more or less)
-
- SUMMARY
-
- AGF - Adaptive Gradient-descent FIR filter.
-
- This is a neural-network-like adaptive FIR filter, employing a network of
- 32 neurons. The adaptation is deterministic, which means that the sample
- can be recovered from the processed file without needing to save the FIR
- coefficients to it as well. Adaptation is done on-line, on a
- sample-by-sample basis.
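-
- For the curious, here is a rough sketch in C of how such a deterministic
- adaptive predictor can work. It is NOT the code from agf.c/fir.c: the
- 32-tap size comes from the description above, while the plain LMS
- (gradient-descent) update rule and the adaptation rate are assumptions.
- The point is that encoder and decoder run the same update on the same
- reconstructed history, so the weights evolve identically on both sides and
- never need to be stored in the processed file.
-
-   #define TAPS 32
-   #define RATE 0.001f          /* adaptation rate (assumed value)       */
-
-   typedef struct {
-       float w[TAPS];           /* FIR coefficients ("neuron" weights)   */
-       float hist[TAPS];        /* last TAPS samples, assumed in [-1, 1] */
-   } Agf;
-
-   static float predict(const Agf *f)
-   {
-       float p = 0.0f;
-       for (int i = 0; i < TAPS; i++)
-           p += f->w[i] * f->hist[i];
-       return p;
-   }
-
-   static void adapt(Agf *f, float error)
-   {
-       /* one gradient-descent step towards a smaller prediction error */
-       for (int i = 0; i < TAPS; i++)
-           f->w[i] += RATE * error * f->hist[i];
-   }
-
-   static void push(Agf *f, float sample)
-   {
-       for (int i = TAPS - 1; i > 0; i--)
-           f->hist[i] = f->hist[i - 1];
-       f->hist[0] = sample;
-   }
-
-   /* encode stores the prediction error, decode adds it back;
-    * both sides must start from the same all-zero Agf state */
-   float agf_encode_sample(Agf *f, float sample)
-   {
-       float e = sample - predict(f);
-       adapt(f, e);
-       push(f, sample);
-       return e;
-   }
-
-   float agf_decode_sample(Agf *f, float residual)
-   {
-       float sample = residual + predict(f);
-       adapt(f, residual);
-       push(f, sample);
-       return sample;
-   }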
-
- USAGE
-
- AGF.fpu MODE sample processed_sample
- AGF.int MODE sample processed_sample
-
- The processed sample can then be packed efficiently with any kind of packer.
- I recommend xpk (xGZIP or xSQSH); lha/lzx will also do :)
- The results are always MUCH better than packing the unprocessed sample
- (see the example after the mode list).
-
- Modes:
- x : extract (decode) using a linear ANN
- c : compress (encode) using a linear ANN
- xd : extract (decode) using a static filter
- cd : compress (encode) using a static filter
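-
- Example of a full round trip (hypothetical file names; the arguments follow
- the "MODE sample processed_sample" order shown above):
-
-   AGF.int c drumloop.raw drumloop.agf
-   gzip drumloop.agf
-
-   gzip -d drumloop.agf.gz
-   AGF.int x drumloop.raw drumloop.agf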
-
- AGF.fpu and AGF.int implement the same algorithm using floating-point and
- fixed-point representations respectively. The first is compiled specifically
- for a 68060 with FPU, the second for the 68060 as well but using the math
- libs for the few FPU instructions it still needs. The integer version is
- twice as fast on my 68030+68882, and the difference in packing performance
- is negligible. I expect the int version to also be faster on 060 machines
- (lots of MULs), but maybe the .fpu version is faster on an 040.. test it.
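-
- As an illustration of what the fixed-point version boils down to, the FIR
- multiply-accumulate can be done entirely with integer MULs roughly like
- this; the actual format used by AGF.int is not documented here, 16.16 is
- just an assumed example.
-
-   #include <stdint.h>
-
-   typedef int32_t fix16;                      /* 16.16 fixed point */
-
-   #define FLOAT_TO_FIX(x)  ((fix16)((x) * 65536.0))
-
-   static fix16 fix_mul(fix16 a, fix16 b)
-   {
-       /* 32x32 -> 64 bit multiply, then drop the extra 16 fraction bits */
-       return (fix16)(((int64_t)a * b) >> 16);
-   }
-
-   /* dot product of weights and sample history, as in the FIR predictor */
-   static fix16 fix_dot(const fix16 *w, const fix16 *h, int n)
-   {
-       fix16 acc = 0;
-       for (int i = 0; i < n; i++)
-           acc += fix_mul(w[i], h[i]);
-       return acc;
-   }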
-
-
- OUTPUT
-
- It outputs the average error of the ANN predictor and, when it finishes,
- shows the values of the ANN weights, in case you are interested :)
-
-
- TODO
-
- Add an RBF layer before the 32-neuron layer.
- Make an xpksublib out of it.
- Add options for adjusting the number of coefficients and adaptation rate.
-
-
- BUGS
-
- Please send bug reports to olethros@geocities.com with "AGF BUG" as the
- subject.
-
- SEE ALSO
-
- See dev/basic/gasp.lha for a similar pre-processor in which the adaptive
- process is controlled by a Genetic Algorithm.
-
-
- ============================= Archive contents =============================
-
- Original Packed Ratio Date Time Name
- -------- ------- ----- --------- -------- -------------
- 2564 1317 48.6% 06-Sep-99 19:41:02 agf.readme
- 17928 8456 52.8% 06-Sep-99 19:31:30 agf.int
- 17000 8245 51.5% 06-Sep-99 19:32:02 agf.fpu
- 1744 677 61.1% 06-Sep-99 19:21:08 agf.c
- 1421 606 57.3% 06-Sep-99 19:27:34 fir.c
- 1534 544 64.5% 19-Jan-99 14:59:16 main.c
- 233 129 44.6% 19-Jan-99 14:36:48 agf.h
- 366 194 46.9% 06-Sep-99 19:28:54 fir.h
- -------- ------- ----- --------- --------
- 42790 20168 52.8% 07-Sep-99 23:40:30 8 files
-